Improved stability and convergence with three factor learning
Authors
Abstract
Donald Hebb postulated that neurons that fire together wire together. However, Hebbian learning is inherently unstable because synaptic weights are self-amplifying: the more a synapse drives a postsynaptic cell, the more the synaptic weight grows. We present a new, biologically realistic way to stabilise synaptic weights by introducing a third factor which switches learning on or off, so that self-amplification is minimised. The third factor can be identified with the activity of dopaminergic neurons in the ventral tegmental area, which leads to a new interpretation of the dopamine signal that goes beyond the classical prediction-error hypothesis. © 2006 Elsevier B.V. All rights reserved.
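To make the instability and the gating idea concrete, the following is a minimal rate-based sketch in Python, not the authors' model: a plain Hebbian update is contrasted with the same update gated by a sparse, dopamine-like third factor. The learning rate, the input statistics, and the gating schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, eta=0.01):
    """Plain Hebbian update dw = eta * post * pre. Because the
    postsynaptic rate y grows with w, the weights self-amplify."""
    y = w @ x                      # postsynaptic activity
    return w + eta * y * x

def three_factor_step(w, x, gate, eta=0.01):
    """The same Hebbian term, gated by a scalar third factor that
    switches learning on (gate=1) or off (gate=0)."""
    y = w @ x
    return w + eta * gate * y * x

w_hebb = rng.normal(0.0, 0.1, size=5)
w_3f = w_hebb.copy()
for t in range(1000):
    x = rng.random(5)                    # presynaptic rates in [0, 1)
    gate = 1.0 if t % 100 == 0 else 0.0  # sparse pulses (assumed schedule)
    w_hebb = hebbian_step(w_hebb, x)
    w_3f = three_factor_step(w_3f, x, gate)

print("plain Hebbian |w| =", np.linalg.norm(w_hebb))  # grows by orders of magnitude
print("three-factor  |w| =", np.linalg.norm(w_3f))    # remains bounded
```

Because the gated rule applies the Hebbian term only while the third factor is non-zero, self-amplification has far fewer opportunities to compound, which is the stabilising effect the abstract describes.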
Similar resources
Cystoscopy Image Classification Using Deep Convolutional Neural Networks
In the past three decades, the use of smart methods in medical diagnostic systems has attracted the attention of many researchers. However, despite the high worldwide prevalence of bladder cancer, no smart method has been provided in the field of medical image processing for its diagnosis from cystoscopy images. In this paper, two well-known convolutional neural networks (CNNs) ...
Stable Rough Extreme Learning Machines for the Identification of Uncertain Continuous-Time Nonlinear Systems
Rough extreme learning machines (RELMs) are rough-neural networks with one hidden layer in which the parameters between the inputs and hidden neurons are arbitrarily chosen and never updated. In this paper, we propose RELMs with a stable online learning algorithm for the identification of continuous-time nonlinear systems in the presence of noise and uncertainties, and we prove the global ...
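The defining feature described above (random input-to-hidden weights that are never updated, with only the output weights trained) can be illustrated with a minimal plain extreme learning machine. This is a sketch of the basic ELM idea, not the rough variant or the stable online algorithm of the paper; the hidden size and activation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Minimal extreme learning machine: input-to-hidden weights are
    drawn at random and never updated; only the hidden-to-output
    weights are solved for, here by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # trainable output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression problem
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```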
Parameter Optimization Algorithm with Improved Convergence Properties for Adaptive Learning
The error of an artificial neural network is a function of its adaptive parameters (weights and biases) and needs to be minimized. Research on adaptive learning usually focuses on gradient algorithms that employ problem-dependent heuristic learning parameters, which usually results in a trade-off between the convergence speed and the stability of the learning algorithm. The paper investigates ...
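The trade-off mentioned above is easy to see on a toy problem: a hand-picked learning rate is the problem-dependent heuristic parameter, and nudging it moves the algorithm from slow but stable, to fast, to divergent. A minimal sketch (illustrative quadratic error, not the paper's method):

```python
def train(eta, steps=100):
    """Gradient descent on the 1-D quadratic error E(w) = (w - 2)^2.
    The learning rate eta is the heuristic learning parameter:
    too small -> slow convergence; too large (> 1 here) -> divergence."""
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 2.0)
        w -= eta * grad
    return w

for eta in (0.01, 0.5, 1.05):
    print(f"eta={eta}: w={train(eta):.3g}")  # slow / converged / unstable
```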
A Higher Order Online Lyapunov-Based Emotional Learning for Rough-Neural Identifiers
To enhance the performance of rough-neural networks (R-NNs) in system identification, a new stable learning algorithm based on emotional learning is developed for them. This algorithm facilitates error convergence by increasing the memory depth of the R-NNs. To this end, an emotional signal, formed as a linear combination of the identification error and its differences, is used to achie...
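As a rough illustration of such a signal (the coefficients and the difference order are assumptions, not the paper's values), a linear combination of the error and its first and second backward differences could look like this:

```python
import numpy as np

def emotional_signal(errors, coeffs=(1.0, 0.5, 0.25)):
    """Hedged sketch of an emotional signal as described above:
    s[t] = c0*e[t] + c1*d1[t] + c2*d2[t], where d1 and d2 are the
    first and second backward differences of the error sequence.
    Coefficient values are illustrative only."""
    e = np.asarray(errors, dtype=float)
    d1 = np.diff(e, prepend=e[0])    # first difference of the error
    d2 = np.diff(d1, prepend=d1[0])  # second difference (deeper memory)
    c0, c1, c2 = coeffs
    return c0 * e + c1 * d1 + c2 * d2

e = np.array([1.0, 0.8, 0.5, 0.45, 0.2])
print(emotional_signal(e))
```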
An Improved Particle Swarm Optimizer Based on a Novel Class of Fast and Efficient Learning Factors Strategies
The particle swarm optimizer (PSO) is a population-based metaheuristic optimization method that can be applied to a wide range of problems, but it has drawbacks: it easily falls into local optima and suffers from slow convergence in the later stages. To solve these problems, improved PSO (IPSO) variants have been proposed. To bring about a balance between the exploration and ex...
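For reference, a minimal textbook PSO appears below; c1 and c2 are the learning factors that the improved strategies in the paper adapt, whereas this plain sketch keeps them fixed (all constants are conventional defaults, not the paper's values).

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer. c1 (cognitive) and c2 (social)
    are the learning factors; w is the inertia weight."""
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

sphere = lambda p: float(np.sum(p ** 2))
print(pso(sphere))  # should approach the optimum at the origin
```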
Journal title:
Neurocomputing
Volume 70, Issue -
Pages -
Publication date: 2007